Results 1 - 20 of 58
1.
Methods Mol Biol ; 2787: 3-38, 2024.
Article in English | MEDLINE | ID: mdl-38656479

ABSTRACT

In this chapter, we explore the application of high-throughput crop phenotyping facilities for phenotype data acquisition and for extracting meaningful information from the collected data through image processing and data mining. We also introduce the construction and outlook of crop phenotype databases and emphasize the need for global cooperation and data sharing. High-throughput crop phenotyping markedly improves accuracy and efficiency compared with traditional measurements, helping to overcome bottlenecks in the phenotyping field and to advance crop genetics.


Subject(s)
Crops, Agricultural; Data Mining; Image Processing, Computer-Assisted; Phenotype; Crops, Agricultural/genetics; Crops, Agricultural/growth & development; Data Mining/methods; Image Processing, Computer-Assisted/methods; Data Management/methods; High-Throughput Screening Assays/methods
2.
Plant Phenomics ; 6: 0139, 2024.
Article in English | MEDLINE | ID: mdl-38550661

ABSTRACT

Oilseed rape is an important oilseed crop grown worldwide. Maturity classification plays a crucial role in enhancing yield and expediting breeding research, but conventional maturity classification methods are laborious and destructive. In this study, a nondestructive classification model was established based on hyperspectral imaging combined with machine learning algorithms. Hyperspectral images were first captured for 3 distinct ripeness stages of rapeseed, and raw spectral data were extracted from the images. The raw spectra were preprocessed with 5 pretreatment methods, namely Savitzky-Golay, first derivative, second derivative (D2nd), standard normal variate, and detrend, as well as various combinations of these methods. Feature wavelengths were then extracted from the processed spectra using competitive adaptive reweighted sampling, the successive projections algorithm (SPA), the interval variable iterative space shrinkage approach (IVISSA), and their combination algorithms. Classification models were constructed with extreme learning machine, k-nearest neighbor, random forest, partial least-squares discriminant analysis, and support vector machine (SVM) algorithms, applied separately to the full wavelength range and to the feature wavelengths. A comparative analysis of the preprocessing methods, feature wavelength selection algorithms, and classification models showed that the preprocessing-feature wavelength selection-machine learning pipeline could effectively predict rapeseed maturity. The D2nd-IVISSA-SPA-SVM model exhibited the highest performance, attaining an accuracy of 97.86%. The findings suggest that rapeseed maturity can be ascertained rapidly and nondestructively through hyperspectral imaging.
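The preprocessing-then-classification pipeline described above can be sketched in a few lines. The code below is an illustrative stand-in, not the authors' implementation: the spectra are synthetic, the SNV and D2nd steps are minimal NumPy versions, and a nearest-centroid classifier substitutes for the paper's SVM.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row-wise)."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

def second_derivative(spectra):
    """Crude D2nd pretreatment: finite-difference second derivative along wavelengths."""
    return np.gradient(np.gradient(spectra, axis=1), axis=1)

# Synthetic stand-in spectra for 3 ripeness stages (spectral shape differs by stage).
rng = np.random.default_rng(0)
n_bands, per_stage = 200, 30
X_raw, y = [], []
for stage in range(3):
    shape = np.sin(np.linspace(0, 3 + stage, n_bands))
    X_raw.append(shape + rng.normal(0, 0.02, (per_stage, n_bands)))
    y += [stage] * per_stage
X_raw, y = np.vstack(X_raw), np.array(y)

X = snv(X_raw)                # preprocessed spectra
X_d2 = second_derivative(X)   # D2nd-transformed spectra (same shape)

# Nearest-centroid classifier as a lightweight stand-in for the paper's SVM.
centroids = np.vstack([X[y == c].mean(axis=0) for c in range(3)])
pred = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2).argmin(axis=1)
accuracy = (pred == y).mean()
```

On such cleanly separated synthetic stages the toy classifier is near-perfect; real maturity classes overlap far more, which is why the paper compares five classifiers and several wavelength-selection schemes.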

3.
Plant Biotechnol J ; 22(4): 802-818, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38217351

ABSTRACT

The microphenotype plays a key role in bridging the gap between the genotype and the complex macro phenotype. In this article, we review the advances in data acquisition and the intelligent analysis of plant microphenotyping and present applications of microphenotyping in plant science over the past two decades. We then point out several challenges in this field and suggest that cross-scale image acquisition strategies, powerful artificial intelligence algorithms, advanced genetic analysis, and computational phenotyping need to be established and performed to better understand interactions among genotype, environment, and management. Microphenotyping has entered the era of Microphenotyping 3.0 and will largely advance functional genomics and plant science.


Subject(s)
Artificial Intelligence; Genomics; Phenotype; Genomics/methods; Genotype; Plants/genetics
4.
Plant Biotechnol J ; 21(10): 1966-1977, 2023 10.
Article in English | MEDLINE | ID: mdl-37392004

ABSTRACT

Dissecting the genetic basis of complex traits such as dynamic growth and yield potential is a major challenge in crops. Monitoring growth throughout the growing season in a large wheat population to uncover the temporal genetic controls of plant growth and yield-related traits has so far not been explored. In this study, a diverse wheat panel of 288 lines was monitored by a non-invasive, high-throughput phenotyping platform to collect growth traits from the seedling to the grain-filling stage, and their relationship with yield-related traits was further explored. Whole-genome re-sequencing of the panel provided 12.64 million markers for a high-resolution genome-wide association analysis using 190 image-based traits (i-traits) and 17 agronomic traits. A total of 8327 marker-trait associations were detected and clustered into 1605 quantitative trait loci (QTLs), including a number of known genes and QTLs. We identified 277 pleiotropic QTLs controlling multiple traits at different growth stages, revealing the temporal dynamics of QTL action on plant development and yield production in wheat. A candidate gene related to plant growth that was detected by image traits was further validated. In particular, our study demonstrated that yield-related traits are largely predictable using models based on i-traits, opening the possibility of high-throughput early selection and thus accelerating the breeding process. By combining high-throughput phenotyping and genotyping, our study explored the genetic architecture of growth and yield-related traits and unravelled the complex, stage-specific contributions of genetic loci to the optimization of growth and yield in wheat.


Subject(s)
Genome-Wide Association Study; Triticum; Triticum/genetics; Plant Breeding; Phenotype; Quantitative Trait Loci/genetics
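Predicting yield-related traits from i-traits, as the abstract above describes, amounts to fitting a regression model with many correlated predictors. The sketch below uses closed-form ridge regression on synthetic data; the panel size, trait count, effect sizes, and train/test split are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_traits = 200, 50                 # hypothetical panel: 200 lines x 50 i-traits
X = rng.normal(size=(n_lines, n_traits))
true_w = rng.normal(size=n_traits) * (rng.random(n_traits) < 0.2)  # few influential traits
y = X @ true_w + rng.normal(0, 0.5, n_lines)                        # simulated "yield"

train, test = slice(0, 150), slice(150, None)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X[train], y[train])
pred = X[test] @ w

# Held-out R^2, the usual report for i-trait-based yield prediction.
ss_res = ((y[test] - pred) ** 2).sum()
ss_tot = ((y[test] - y[test].mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

Ridge is only one reasonable choice here; with 190 i-traits measured over many time points, penalized or ensemble regressors are common ways to keep such models stable.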
5.
Plant Methods ; 19(1): 75, 2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37516875

ABSTRACT

BACKGROUND: Verticillium wilt is a major disease of cotton that causes serious yield reductions and economic losses, so its identification is of great significance to cotton research. However, the traditional method is still manual, which is subjective, inefficient, and labor-intensive. This study therefore proposes a novel method for cotton verticillium wilt identification based on the fusion of spectral and image features. Cotton hyperspectral images were collected, and regions of interest (ROI) were extracted as samples, comprising 499 healthy and 498 diseased leaves; the average spectral information and an RGB image were obtained for each sample. For spectral feature processing, the preprocessing methods adopted were Savitzky-Golay smoothing (SG), multiplicative scatter correction (MSC), de-trending (DT), and mean normalization (MN), while feature bands were extracted with principal component analysis (PCA) and the successive projections algorithm (SPA). For RGB image feature processing, EfficientNet was used to build a classification model, and 16 image features were extracted from its last convolutional layer. The spectral and image features were then fused, and classification models were established with a support vector machine (SVM) and a back-propagation neural network (BPNN). For comparison, the full spectral bands and the feature bands were also used for SVM and BPNN classification, respectively. RESULTS: The average accuracy of EfficientNet for cotton verticillium wilt identification was 93.00%. With the full spectral bands, the SG-MSC-BPNN model performed better, with a classification accuracy of 93.78%. With the feature bands, the SG-MN-SPA-BPNN model performed better, also with a classification accuracy of 93.78%. With the fused spectral and image features, the SG-MN-SPA-FF-BPNN model achieved the best performance, with a classification accuracy of 98.99%. CONCLUSIONS: The study demonstrated that fusing spectral and image features from hyperspectral imaging is a feasible and effective way to improve the identification accuracy of cotton verticillium wilt, providing a theoretical basis and methods for its non-destructive and accurate identification.
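The feature-level fusion described above is, at its core, a concatenation of the two feature vectors before classification. The sketch below illustrates that step with synthetic features (the dimensions 8 and 16, the class shifts, and the plain logistic-regression classifier are all illustrative stand-ins for the paper's spectral bands, EfficientNet features, and SVM/BPNN models).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Hypothetical stand-ins: 8 spectral-band features and 16 CNN image features per leaf.
spec = rng.normal(size=(n, 8));  spec[:n // 2] += 0.8      # diseased leaves shifted
img  = rng.normal(size=(n, 16)); img[:n // 2, :4] += 1.0
y = np.array([1] * (n // 2) + [0] * (n // 2))              # 1 = diseased, 0 = healthy

fused = np.hstack([spec, img])                             # feature-level fusion

# Minimal logistic-regression classifier trained by gradient descent,
# standing in for the paper's SVM / BPNN models.
Xb = np.hstack([fused, np.ones((n, 1))])                   # add bias column
w = np.zeros(Xb.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n
acc = ((1 / (1 + np.exp(-Xb @ w)) > 0.5) == y).mean()
```

The point of fusion is visible even in this toy: neither feature block alone separates the classes as well as the concatenation does, which mirrors the jump from ~93% to ~99% reported in the abstract.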

6.
Sensors (Basel) ; 23(14)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37514625

ABSTRACT

China is the largest producer and consumer of rice, and the classification of filled/unfilled rice grains is of great significance for rice breeding and genetic analysis. Traditional identification of filled/unfilled rice grains is generally manual, with the disadvantages of low efficiency, poor repeatability, and low precision. In this study, we propose a novel method for filled/unfilled grain classification based on structured-light imaging and an improved PointNet++. First, 3D point cloud data of rice grains were obtained by structured-light imaging. Dedicated processing algorithms were then developed for single-grain segmentation and for data enhancement with normal vectors. Finally, the PointNet++ network was improved by adding an additional Set Abstraction layer and combining it with maximum pooling of normal vectors to classify filled/unfilled rice grain point clouds. To verify model performance, the improved PointNet++ was compared with six machine learning methods, PointNet, and PointConv. The best machine learning model was XGBoost, with a classification accuracy of 91.99%, while the improved PointNet++ reached 98.50%, outperforming PointNet (93.75%) and PointConv (92.25%). In conclusion, this study demonstrates a novel and effective method for filled/unfilled grain recognition.

7.
Plant Phenomics ; 5: 0064, 2023.
Article in English | MEDLINE | ID: mdl-37469555

ABSTRACT

The green fraction (GF), which is the fraction of green vegetation in a given viewing direction, is closely related to the light interception ability of the crop canopy. Monitoring the dynamics of GF is therefore of great interest for breeders to identify genotypes with high radiation use efficiency. The accuracy of GF estimation depends heavily on the quality of the segmentation dataset and the accuracy of the image segmentation method. To enhance segmentation accuracy while reducing annotation costs, we developed a self-supervised strategy for deep learning semantic segmentation of rice and wheat field images with very contrasting field backgrounds. First, the Digital Plant Phenotyping Platform was used to generate large, perfectly labeled simulated field images for wheat and rice crops, considering diverse canopy structures and a wide range of environmental conditions (sim dataset). We then used the domain adaptation model cycle-consistent generative adversarial network (CycleGAN) to bridge the reality gap between the simulated and real images (real dataset), producing simulation-to-reality images (sim2real dataset). Finally, 3 different semantic segmentation models (U-Net, DeepLabV3+, and SegFormer) were trained using 3 datasets (real, sim, and sim2real datasets). The performance of the 9 training strategies was assessed using real images captured from various sites. The results showed that SegFormer trained using the sim2real dataset achieved the best segmentation performance for both rice and wheat crops (rice: Accuracy = 0.940, F1-score = 0.937; wheat: Accuracy = 0.952, F1-score = 0.935). Likewise, favorable GF estimation results were obtained using the above strategy (rice: R2 = 0.967, RMSE = 0.048; wheat: R2 = 0.984, RMSE = 0.028). Compared with SegFormer trained using a real dataset, the optimal strategy demonstrated greater superiority for wheat images than for rice images. 
This discrepancy can be partially attributed to the differences in the backgrounds of the rice and wheat fields. The uncertainty analysis indicated that our strategy could be disrupted by inhomogeneous pixel brightness and by the presence of senescent elements in the images. In summary, our self-supervised strategy addresses the high cost and uncertain annotation accuracy of dataset creation, ultimately enhancing GF estimation accuracy for rice and wheat field images. The best model weights trained on wheat and rice are available at https://github.com/PheniX-Lab/sim2real-seg.
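The pixel accuracy and F1-score used above to rank segmentation strategies are straightforward to compute from binary vegetation masks. A minimal sketch (the toy masks below are invented; for binary masks the F1-score coincides with the Dice coefficient):

```python
import numpy as np

def seg_scores(pred, truth):
    """Pixel accuracy and F1 (Dice) for binary vegetation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()       # vegetation pixels correctly predicted
    fp = (pred & ~truth).sum()      # background predicted as vegetation
    fn = (~pred & truth).sum()      # vegetation missed
    acc = (pred == truth).mean()
    f1 = 2 * tp / (2 * tp + fp + fn)
    return float(acc), float(f1)

truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True               # toy "plant" region
pred = truth.copy()
pred[16:48, 44:52] = True                # prediction overshoots to the right
acc, f1 = seg_scores(pred, truth)        # acc = 0.96875, f1 ≈ 0.941
```

The green fraction itself is then simply `pred.mean()` per image, which is why segmentation quality propagates directly into GF estimation error.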

8.
Plant Phenomics ; 5: 0058, 2023.
Article in English | MEDLINE | ID: mdl-37304154

ABSTRACT

As one of the most widely grown crops in the world, rice is not only a staple food but also a source of calorie intake for more than half of the world's population, and it occupies an important position in China's agricultural production. Determining the potential connections between the genetic mechanisms and phenotypes of rice through dynamic, high-throughput, nondestructive, and accurate analyses based on high-throughput crop phenotyping facilities is therefore of vital importance for rice genetics and breeding research. In this work, we developed a strategy for acquiring and analyzing 58 image-based traits (i-traits) across the whole growth period of rice. Up to 84.8% of the phenotypic variance of rice yield could be explained by these i-traits. A total of 285 putative quantitative trait loci (QTLs) were detected for the i-traits, and principal component analysis was applied to the i-traits in the temporal and organ dimensions, in combination with a genome-wide association study that also isolated QTLs. Moreover, the differences in phenotypic traits among the different population structures and breeding regions of rice demonstrated good environmental adaptability, and the crop growth and development model also agreed well with breeding-region latitude. In summary, the strategy developed here for the acquisition and analysis of image-based rice phenomes provides a new approach and a different line of thinking for extracting and analyzing crop phenotypes across the whole growth period, and can thus be useful for future genetic improvements in rice.
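The principal component analysis applied to the i-trait matrix above reduces dozens of correlated traits to a few latent growth axes. A minimal SVD-based sketch (the accession count, trait count, and three-factor structure below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical i-trait matrix: 100 rice accessions x 58 image-based traits,
# driven by 3 latent growth factors plus measurement noise.
latent = rng.normal(size=(100, 3))
loadings = rng.normal(size=(3, 58))
traits = latent @ loadings + rng.normal(0, 0.1, (100, 58))

X = traits - traits.mean(axis=0)          # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / (S**2).sum()           # variance ratio per component
scores = X @ Vt[:3].T                     # accession scores on the first 3 PCs
```

In a GWAS setting, the `scores` columns can themselves be mapped as derived traits, which is one way PCA and association mapping are combined in phenomics pipelines.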

10.
Plant Phenomics ; 5: 0013, 2023.
Article in English | MEDLINE | ID: mdl-37040292

ABSTRACT

Verticillium wilt is one of the most critical cotton diseases and is widely distributed across cotton-producing countries. However, the conventional method of verticillium wilt investigation is still manual, with the disadvantages of subjectivity and low efficiency. In this research, an intelligent vision-based system was proposed to dynamically observe cotton verticillium wilt with high accuracy and high throughput. First, a 3-coordinate motion platform was designed with a movement range of 6,100 mm × 950 mm × 500 mm, and a dedicated control unit was adopted to achieve accurate movement and automatic imaging. Second, verticillium wilt recognition was established based on 6 deep learning models, among which the VarifocalNet (VFNet) model had the best performance, with a mean average precision (mAP) of 0.932. Deformable convolution, deformable region-of-interest pooling, and soft non-maximum suppression were then adopted to improve VFNet, raising the mAP of the VFNet-Improved model by 1.8%. The precision-recall curves showed that VFNet-Improved was superior to VFNet for each category, with a larger improvement on the ill-leaf category than on the fine-leaf category. Regression results showed that system measurements based on VFNet-Improved were highly consistent with manual measurements. Finally, user software was designed based on VFNet-Improved, and dynamic observation results proved that the system can accurately investigate cotton verticillium wilt and quantify the prevalence rate of different resistant varieties. In conclusion, this study demonstrates a novel intelligent system for the dynamic observation of cotton verticillium wilt on the seedbed, providing a feasible and effective tool for cotton breeding and disease-resistance research.
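One of the optimizations named above, soft non-maximum suppression, replaces hard removal of overlapping detections with score decay, which helps when diseased leaves overlap in the image. A self-contained NumPy sketch of the Gaussian variant (the boxes, scores, and sigma below are toy values, not the paper's configuration):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.1):
    """Gaussian soft-NMS: decay overlapping scores instead of discarding boxes."""
    scores = scores.astype(float).copy()
    keep, idx = [], list(range(len(boxes)))
    while idx:
        best = max(idx, key=lambda i: scores[i])   # highest remaining score
        keep.append(best)
        idx.remove(best)
        if idx:
            overlaps = iou(boxes[best], boxes[idx])
            scores[idx] *= np.exp(-overlaps**2 / sigma)   # decay, don't delete
        idx = [i for i in idx if scores[i] >= score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = soft_nms(boxes, scores)    # overlapping box 1 is down-weighted, not removed
```

With hard NMS at a typical IoU threshold, box 1 would be discarded outright; soft-NMS keeps it with a reduced score, which is the behavior that improves recall on overlapping leaves.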

11.
Pest Manag Sci ; 79(7): 2591-2602, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36883563

ABSTRACT

BACKGROUND: Spatially explicit weed information is critical for controlling weed infestation and reducing corn yield losses. The development of unmanned aerial vehicle (UAV)-based remote sensing presents an unprecedented opportunity for efficient, timely weed mapping. Spectral, textural, and structural measurements have been used for weed mapping, whereas thermal measurements, such as canopy temperature (CT), have seldom been considered or used. In this study, we quantified the optimal combination of spectral, textural, structural, and CT measurements based on different machine-learning algorithms for weed mapping. RESULTS: CT improved weed-mapping accuracies as complementary information to spectral, textural, and structural features (up to 5% and 0.051 improvements in overall accuracy [OA] and Macro-F1, respectively). The fusion of textural, structural, and thermal features achieved the best performance in weed mapping (OA = 96.4%, Macro-F1 = 0.964), followed by the fusion of structural and thermal features (OA = 93.6%, Macro-F1 = 0.936). The Support Vector Machine-based model achieved the best performance in weed mapping, with improvements of 3.5% and 7.1% in OA and of 0.036 and 0.071 in Macro-F1, respectively, compared with the best Random Forest and Naïve Bayes Classifier models. CONCLUSION: Thermal measurements can complement other types of remote-sensing measurements and improve weed-mapping accuracy within a data-fusion framework. Importantly, integrating textural, structural, and thermal features achieved the best performance for weed mapping. Our study provides a novel method for weed mapping using UAV-based multisource remote-sensing measurements, which is critical for ensuring crop production in precision agriculture. © 2023 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.


Subject(s)
Unmanned Aerial Devices; Zea mays; Bayes Theorem; Remote Sensing Technology/methods; Agriculture
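The two metrics reported throughout the abstract above, overall accuracy (OA) and Macro-F1, are easy to compute from per-pixel label vectors. A minimal sketch with an invented 3-class toy map (the class names and counts are illustrative only):

```python
import numpy as np

def oa_and_macro_f1(y_true, y_pred, n_classes):
    """Overall accuracy and Macro-F1 from label vectors (classes 0..n_classes-1)."""
    oa = (y_true == y_pred).mean()
    f1s = []
    for c in range(n_classes):
        tp = ((y_pred == c) & (y_true == c)).sum()
        fp = ((y_pred == c) & (y_true != c)).sum()
        fn = ((y_pred != c) & (y_true == c)).sum()
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return float(oa), float(np.mean(f1s))   # Macro-F1 = unweighted class mean

# Toy 3-class pixel map: 0 = soil, 1 = corn, 2 = weed.
y_true = np.array([0] * 50 + [1] * 30 + [2] * 20)
y_pred = y_true.copy()
y_pred[:5] = 2                             # five soil pixels mislabeled as weed
oa, macro_f1 = oa_and_macro_f1(y_true, y_pred, 3)
```

Because Macro-F1 averages classes without weighting by size, it penalizes errors on the rare weed class more than OA does, which is why the paper reports both.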
12.
Front Plant Sci ; 14: 1097725, 2023.
Article in English | MEDLINE | ID: mdl-36778701

ABSTRACT

Introduction: Nondestructive detection of crop phenotypic traits in the field is very important for crop breeding. Ground-based mobile platforms equipped with sensors can obtain crop phenotypic traits efficiently and accurately. In this study, we propose a dynamic in-field 3D data acquisition method suitable for various crops, using a consumer-grade RGB-D camera installed on a ground-based movable platform, which dynamically collects RGB images as well as depth images of crop canopy sequences. Methods: A scale-invariant feature transform (SIFT) operator was used to match adjacent data frames acquired by the RGB-D camera, from which the coarse point cloud alignment matrix and the displacement between adjacent images were calculated. The data frames used for point cloud matching were selected according to the calculated displacement. The colored ICP (iterative closest point) algorithm was then used to determine the fine matching matrix and generate the point cloud of the crop row. A clustering method was applied to segment the point cloud of each plant from the crop-row point cloud, and 3D phenotypic traits, including plant height, leaf area, and projected area of individual plants, were measured. Results and Discussion: We compared the method against LiDAR- and image-based 3D reconstruction, with experiments carried out on corn, tobacco, cotton, and Bletilla striata at the seedling stage. The results show that measurements of plant height (R² = 0.9-0.96, RMSE = 0.015-0.023 m), leaf area (R² = 0.8-0.86, RMSE = 0.0011-0.0041 m²), and projected area (R² = 0.96-0.99) correlate strongly with manual measurements. Additionally, 3D reconstruction at different moving speeds, at different times of day, and in different scenes was also verified. The results show that the method supports dynamic detection at moving speeds of up to 0.6 m/s and achieves acceptable results both in the daytime and at night. Thus, the proposed method can improve the efficiency of extracting individual-crop 3D point cloud data with acceptable accuracy, offering a feasible solution for outdoor 3D phenotyping of crop seedlings.
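The fine-registration step above relies on ICP, which alternates nearest-neighbor matching with a closed-form rigid alignment. The sketch below is a minimal point-to-point version (no color term, brute-force neighbors, synthetic "canopy" points with an invented misalignment), not the colored-ICP variant the paper uses.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Kabsch algorithm: least-squares rotation R and translation t with Q ≈ R P + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]          # nearest dst point for each src point
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(4)
dst = rng.normal(size=(100, 3))                   # reference "canopy" point cloud
theta = 0.1                                       # small inter-frame rotation (toy value)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
src = dst @ R.T + np.array([0.05, -0.03, 0.02])   # misaligned frame
aligned = icp(src, dst)
err = np.abs(aligned - dst).max()
```

Colored ICP adds a photometric residual to the geometric one, which is what lets the paper's pipeline lock onto texture on near-planar canopy surfaces where plain ICP can slide.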

13.
Plant Cell Environ ; 46(2): 549-566, 2023 02.
Article in English | MEDLINE | ID: mdl-36354160

ABSTRACT

Salt stress is a major limiting factor that severely affects the survival and growth of crops. It is important to understand the salt stress tolerance of Brassica napus and to explore the underlying genetic resources. We used a high-throughput phenotyping platform to quantify 2111 image-based traits (i-traits) of a natural population under three different salt stress conditions, and of an intervarietal substitution line (ISL) population under nine different stress conditions, to monitor and evaluate the salt stress tolerance of B. napus over time. We identified 928 high-quality i-traits associated with the salt stress tolerance of B. napus. Moreover, we mapped salt stress-related loci in the natural population via a genome-wide association study and performed a linkage analysis in the ISL population. These analyses revealed 234 candidate genes associated with the salt stress response, and two novel candidate genes, BnCKX5 and BnERF3, were experimentally verified to regulate the salt stress tolerance of B. napus. This study demonstrates the feasibility of high-throughput phenotyping-based quantitative trait loci mapping for accurately and comprehensively quantifying i-traits in B. napus. The mapped loci could be used in genomics-assisted breeding to genetically improve the salt stress tolerance of B. napus.


Subject(s)
Brassica napus; Quantitative Trait Loci; Quantitative Trait Loci/genetics; Brassica napus/physiology; Chromosome Mapping/methods; Genome-Wide Association Study; Salt Tolerance/genetics
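At its simplest, the association mapping described above tests each marker against an i-trait one at a time. The sketch below runs a single-marker scan on synthetic genotypes (accession count, marker count, causal-marker index, and effect size are all invented; real GWAS additionally models population structure and kinship, which this toy omits).

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 300, 1000                                        # 300 accessions x 1000 SNPs (toy)
geno = rng.integers(0, 3, size=(n, m)).astype(float)    # 0/1/2 minor-allele dosages
causal = 137                                            # hypothetical causal marker
trait = 0.8 * geno[:, causal] + rng.normal(0, 1.0, n)   # simulated salt-tolerance i-trait

# Single-marker scan: squared correlation between each SNP and the trait.
G = (geno - geno.mean(axis=0)) / geno.std(axis=0)       # standardize dosages
t = (trait - trait.mean()) / trait.std()
r2 = (G.T @ t / n) ** 2                                 # per-marker r^2
top = int(r2.argmax())                                  # strongest association
```

Under the null, per-marker r² is roughly 1/n, so the causal marker stands out clearly here; genome-wide significance thresholds and mixed-model corrections are what make this rigorous in practice.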
15.
Front Plant Sci ; 13: 1069849, 2022.
Article in English | MEDLINE | ID: mdl-36561444

ABSTRACT

With the completion of the coconut genome map and the gradual improvement of related molecular biology tools, molecular marker-assisted breeding has become the next focus of coconut breeding, and accurate measurement of coconut phenotypic traits will provide technical support for screening and identifying the correspondence between genotype and phenotype. A Micro-CT system was developed to measure coconut fruits and seeds automatically and nondestructively and to acquire 3D models and phenotypic traits. A DeepLabV3+ model with an Xception backbone was used to automatically segment sectional images of coconut fruits and seeds. Compared with structured-light measurements, the mean absolute percentage errors of the fruit volume and surface area measured by the Micro-CT system were 1.87% and 2.24%, respectively, and the squared correlation coefficients were 0.977 and 0.964, respectively. In addition, compared with manual measurements, the mean absolute percentage errors of the automatic copra weight and total biomass measurements were 8.85% and 25.19%, respectively, and the adjusted squared correlation coefficients were 0.922 and 0.721, respectively. The Micro-CT system can nondestructively and precisely obtain up to 21 agronomic traits and 57 digital traits.
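The agreement statistics quoted above, mean absolute percentage error (MAPE) and the squared correlation coefficient, can be computed as follows (the three fruit-volume pairs below are invented numbers purely to exercise the functions):

```python
import numpy as np

def mape(measured, reference):
    """Mean absolute percentage error of `measured` against a reference method."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(measured - reference) / np.abs(reference))

def r_squared(x, y):
    """Square of the Pearson correlation coefficient between two measurement series."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

ct_volume  = [498.0, 305.0, 412.0]   # hypothetical Micro-CT fruit volumes (mL)
ref_volume = [500.0, 300.0, 420.0]   # hypothetical structured-light references (mL)
err = mape(ct_volume, ref_volume)
```

Note that MAPE weights errors relative to the reference magnitude, so the large 25.19% biomass error in the abstract reflects proportionally large deviations even if the absolute differences are modest.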

16.
Front Plant Sci ; 13: 1028779, 2022.
Article in English | MEDLINE | ID: mdl-36457523

ABSTRACT

Three ecotypes of rapeseed (winter, spring, and semi-winter) have formed as the plant adapted to different geographic areas. Although several major loci have been found to contribute to the flowering divergence, the genomic footprints and the associated dynamic plant architecture in the vegetative growth stage underlying the ecotype divergence remain largely unknown in rapeseed. Here, a set of 41 dynamic i-traits and 30 growth-related traits were obtained by high-throughput phenotyping of 171 diverse rapeseed accessions. Large phenotypic variation and high broad-sense heritability were observed for these i-traits across all developmental stages. Of these, 19 i-traits were identified by a random forest machine learning model as contributing to the divergence of the three ecotypes, and they could serve as biomarkers to predict ecotype. Furthermore, we analyzed genomic variation in the population, QTL information for all dynamic i-traits, and the genomic basis of ecotype differentiation. We found that 213, 237, and 184 QTLs responsible for the differentiated i-traits overlapped with signals of ecotype divergence between winter and spring, winter and semi-winter, and spring and semi-winter, respectively. Among these, four divergent regions were common between winter and spring/semi-winter, and the strongest divergent regions between spring and semi-winter overlapped with dynamic QTLs responsible for the differentiated i-traits at multiple growth stages. Our study provides important insights into the divergence of plant architecture in the vegetative growth stage among the three ecotypes, which was driven by genetic differentiation and might contribute to environmental adaptation and yield improvement.
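The random-forest step above both classifies accessions into ecotypes and ranks which i-traits drive the separation (via feature importances). A minimal scikit-learn sketch on synthetic data; the accession counts, the 19-trait dimension, and the per-ecotype trait shifts are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
# Hypothetical dynamic i-traits for three ecotypes (0=winter, 1=spring, 2=semi-winter):
# each ecotype shifts a different subset of 19 discriminative traits.
X, y = [], []
for eco in range(3):
    block = rng.normal(size=(60, 19))
    block[:, eco * 3:eco * 3 + 3] += 2.0
    X.append(block)
    y += [eco] * 60
X, y = np.vstack(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
acc = rf.score(Xte, yte)                                   # held-out ecotype accuracy
top_traits = np.argsort(rf.feature_importances_)[::-1][:5]  # candidate biomarker traits
```

The `feature_importances_` ranking is what turns a black-box classifier into the paper's list of 19 biomarker i-traits, though permutation importance is often preferred when traits are strongly correlated.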

17.
Plant Methods ; 18(1): 138, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36522641

ABSTRACT

BACKGROUND: Virtual plants can simulate the plant growth and development process through computer modeling, which assists in revealing plant growth and development patterns. Virtual plant visualization technology is a core part of virtual plant research. The major limitation of the existing plant growth visualization models is that the produced virtual plants are not realistic and cannot clearly reflect plant color, morphology and texture information. RESULTS: This study proposed a novel trait-to-image crop visualization tool named CropPainter, which introduces a generative adversarial network to generate virtual crop images corresponding to the given phenotypic information. CropPainter was first tested for virtual rice panicle generation as an example of virtual crop generation at the organ level. Subsequently, CropPainter was extended for visualizing crop plants (at the plant level), including rice, maize and cotton plants. The tests showed that the virtual crops produced by CropPainter are very realistic and highly consistent with the input phenotypic traits. The codes, datasets and CropPainter visualization software are available online. CONCLUSION: In conclusion, our method provides a completely novel idea for crop visualization and may serve as a tool for virtual crops, which can assist in plant growth and development research.

18.
Int J Mol Sci ; 23(21)2022 Nov 03.
Article in English | MEDLINE | ID: mdl-36362251

ABSTRACT

Pollen grains, the male gametophytes for reproduction in higher plants, are vulnerable to various stresses that lead to loss of viability and, eventually, of crop yield. The conventional method for assessing pollen viability is manual counting after staining, which is laborious and hinders high-throughput screening. We developed an automatic detection tool (PollenDetect) to distinguish viable and nonviable pollen, based on the YOLOv5 neural network adapted to this small-target detection task. Compared with manual work, PollenDetect significantly reduced detection time (from approximately 3 min to 1 s per image) while maintaining high detection accuracy. When PollenDetect was tested on cotton pollen viability, 99% accuracy was achieved. Furthermore, the results obtained using PollenDetect show that high temperature weakened cotton pollen viability, closely matching the viability results obtained by 2,3,5-triphenyltetrazolium formazan quantification. PollenDetect is open-source software that can be further trained to count different types of pollen for research purposes. Thus, PollenDetect is a rapid and accurate system for recognizing pollen viability status, which is important for screening stress-resistant crop varieties and for identifying pollen viability and stress resistance genes in breeding research.


Subject(s)
Deep Learning; Plant Breeding; Pollen; Software; Hot Temperature
19.
Front Plant Sci ; 13: 968855, 2022.
Article in English | MEDLINE | ID: mdl-36119566

ABSTRACT

Tobacco is an important economic crop worldwide, and Tobacco mosaic virus (TMV) seriously affects the yield and quality of tobacco leaves. The expression of TMV in tobacco leaves can be analyzed by detecting green-fluorescence-related traits after inoculation with an infectious clone of TMV-GFP (Tobacco mosaic virus - green fluorescent protein). However, traditional methods for detecting TMV-GFP are time-consuming, laborious, and mostly manual. In this study, we developed a low-cost machine-vision-based phenotyping platform for the automatic evaluation of fluorescence-related traits in tobacco leaves using a digital camera and image processing. A dynamic monitoring experiment lasting 7 days was conducted to evaluate the platform on 14 Nicotiana tabacum L. samples, comprising the wild-type strain SR1 and 4 mutant lines generated by RNA interference. We found that green fluorescence area and brightness generally increased over time, with trends differing among SR1 and the 4 mutant lines: the largest and smallest green fluorescence area and brightness were observed in mutant-4 and mutant-1, respectively. In conclusion, the platform can fully automatically extract fluorescence-related traits at low cost and with high accuracy, and could be used to detect dynamic changes of TMV-GFP in tobacco leaves.
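Extracting "green fluorescence area" and "brightness" from a camera image reduces to masking green-dominant pixels and summarizing them. The sketch below uses simple channel-difference thresholds on a synthetic image; the margins are illustrative guesses, not the calibrated thresholds the platform would use.

```python
import numpy as np

def green_fluorescence_traits(rgb):
    """Green-pixel area fraction and mean green brightness from an RGB uint8 array.

    A pixel counts as 'green fluorescent' when G exceeds both R and B by a
    margin; the thresholds here are illustrative, not the paper's calibration.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r + 20) & (g > b + 20) & (g > 60)
    area_fraction = mask.mean()                       # fraction of fluorescent pixels
    brightness = g[mask].mean() if mask.any() else 0.0
    return float(area_fraction), float(brightness)

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[20:40, 20:60, 1] = 200          # a bright green fluorescent patch
img[:, :, 2] = 30                   # dim blue background
area, bright = green_fluorescence_traits(img)
```

Tracking these two numbers per leaf per day yields exactly the kind of time-series curves the experiment above compares across SR1 and the four mutant lines.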

20.
Front Plant Sci ; 13: 900408, 2022.
Article in English | MEDLINE | ID: mdl-35937323

ABSTRACT

High-throughput phenotyping of yield-related traits is meaningful and necessary for rice breeding and genetic study. Conventional evaluation of rice yield-related traits suffers from difficult threshing, complex measurement procedures, and low efficiency. To solve these problems, a novel intelligent system comprising an integrated threshing unit, grain conveyor-imaging units, a threshed-panicle conveyor-imaging unit, and specialized image analysis software is proposed to evaluate rice yield traits with high throughput and high accuracy. To improve threshed-panicle detection accuracy, Region of Interest Align, a Convolution-Batch normalization-activation with Leaky ReLU module, a Squeeze-and-Excitation unit, and optimized anchor sizes were adopted to refine the Faster R-CNN architecture, termed 'TPanicle-RCNN'; the new model achieved an F1 score of 0.929, an increase of 0.044, and was robust to both indica and japonica varieties. Additionally, AI cloud computing was adopted, which dramatically reduced system cost and improved flexibility. To evaluate system accuracy and efficiency, 504 panicle samples were tested, and the total spikelet measurement error decreased from 11.44% to 2.99% with threshed-panicle compensation. The average measurement time was approximately 40 s per sample, roughly twenty times faster than manual measurement. In this study, an automatic and intelligent system for rice yield-related trait evaluation was developed, providing an efficient and reliable tool for rice breeding and genetic research.
